24 research outputs found

    Language-independent model transformation verification

    One hindrance to model transformation verification is the large number of different MT languages that exist, resulting in a correspondingly large number of language-specific analysis tools. As an alternative, we define a single analysis process which can, in principle, analyse specifications in several different transformation languages, by making use of a common intermediate representation to express the semantics of transformations in any of these languages. Some analyses can be performed directly on the intermediate representation, and further semantic models in specific verification formalisms can be derived from it. We illustrate the approach by applying it to ATL.
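
    The abstract does not spell out the intermediate representation itself, so the following is only a hedged sketch of the general idea: a toy, language-independent encoding of a transformation rule, with one analysis run directly on that encoding. All names here (Rule, Binding, Class2Table, unbound_mandatory_features) are illustrative assumptions, not constructs from the paper.

```python
# Illustrative sketch only: a toy "common intermediate representation" (CIR)
# for transformation rules, plus one analysis performed directly on it.
from dataclasses import dataclass, field

@dataclass
class Binding:
    """Assignment of a target feature from a source expression."""
    target_feature: str
    source_expr: str

@dataclass
class Rule:
    """Language-independent view of one transformation rule."""
    name: str
    source_type: str            # metamodel class matched in the input model
    target_type: str            # metamodel class created in the output model
    bindings: list[Binding] = field(default_factory=list)

def unbound_mandatory_features(rule: Rule, mandatory: dict[str, set[str]]) -> set[str]:
    """Analysis on the CIR: mandatory target features the rule never assigns.

    `mandatory` maps each target type to the features its metamodel requires.
    A non-empty result flags a rule that can produce ill-formed output models.
    """
    assigned = {b.target_feature for b in rule.bindings}
    return mandatory.get(rule.target_type, set()) - assigned

# Usage: an ATL-style Class-to-Table rule that forgets to set the table name.
rule = Rule(
    name="Class2Table",
    source_type="Class",
    target_type="Table",
    bindings=[Binding("columns", "self.attributes")],
)
print(unbound_mandatory_features(rule, {"Table": {"name", "columns"}}))
# -> {'name'}
```

    The point of the sketch is that the check inspects only the intermediate representation, so the same analysis could, in principle, be reused for rules translated from ATL or from any other MT language.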

    High-integrity model-based development

    Slicing Techniques for UML Models

    Model-transformation design patterns

    Ant-colony optimization for automating test model generation in model transformation testing

    In model transformation (MT) testing, test data generation is of key importance. However, test suites are not available out of the box, and existing approaches to generating them require not only the metamodel to which the models must conform, but also other domain-specific artifacts. For instance, an MT developer aiming to implement an MT incrementally may need a quality test suite from the very beginning, even before all MT requirements are clear, with only the metamodels as input. We propose a black-box approach for the generation of test models in which only the input metamodel of the MT is available. We propose an Ant-Colony Optimization algorithm that searches for test models satisfying the objectives of maximizing internal diversity and maximizing external diversity. We provide a tool prototype that implements this approach and generates the models in the well-established XMI interchange format. A comparison study with state-of-the-art frameworks shows that models are generated in reasonable times with low memory consumption. We empirically demonstrate the adequacy of our approach to generate effective test models, obtaining an overall mutation score above 80% in an evaluation with more than 5000 MT mutants.
    TED2021-130523B-I00 PID2021-125527NB-I0
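
    As a rough illustration of the kind of search described, the sketch below runs a minimal ant-colony loop that builds candidate test models as bags of metamodel classes and rewards diverse ones. The model encoding, fitness functions, and parameters are simplified assumptions for illustration, not the paper's algorithm, and XMI serialisation is not shown.

```python
# Hedged sketch: a toy ant-colony loop that grows a suite of "test models",
# each encoded as a multiset of classes from a toy input metamodel.
import random
from collections import Counter

CLASSES = ["Package", "Class", "Attribute", "Reference"]  # toy input metamodel
MODEL_SIZE, ANTS, ITERATIONS = 8, 10, 30
EVAPORATION, ALPHA = 0.1, 1.0

def internal_diversity(model: list[str]) -> float:
    """Fraction of distinct metamodel classes used inside one model."""
    return len(set(model)) / len(CLASSES)

def external_diversity(model: list[str], suite: list[list[str]]) -> float:
    """Mean dissimilarity between this model and the models already kept."""
    if not suite:
        return 1.0
    def dist(a, b):
        ca, cb = Counter(a), Counter(b)
        return sum((ca - cb).values()) + sum((cb - ca).values())
    return sum(dist(model, m) for m in suite) / (len(suite) * 2 * MODEL_SIZE)

pheromone = {c: 1.0 for c in CLASSES}
suite: list[list[str]] = []

for _ in range(ITERATIONS):
    # Each ant constructs a model by sampling classes proportionally to pheromone.
    ants = [random.choices(CLASSES,
                           weights=[pheromone[c] ** ALPHA for c in CLASSES],
                           k=MODEL_SIZE)
            for _ in range(ANTS)]
    best = max(ants, key=lambda m: internal_diversity(m) + external_diversity(m, suite))
    suite.append(best)
    # Evaporate pheromone, then reinforce the classes used by the iteration-best model.
    for c in CLASSES:
        pheromone[c] *= (1 - EVAPORATION)
    for c in best:
        pheromone[c] += internal_diversity(best) / MODEL_SIZE

print(f"generated {len(suite)} test models, e.g. {suite[0]}")
```

    The combined score mirrors the two stated objectives: internal diversity looks only at the model itself, while external diversity compares each candidate against the suite built so far, so later iterations are pushed toward models that differ from those already generated.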